Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

A multimodal dataset of spontaneous speech and movement production on object affordances

Internal identifier: 000313 (Main/Exploration); previous: 000312; next: 000314

Authors: Argiro Vatakis [Greece]; Katerina Pastra [Greece]

Source:

RBID: PMC:4718047

Abstract

In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To-date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of ‘thinking aloud’, spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.


URL:
DOI: 10.1038/sdata.2015.78
PubMed: 26784391
PubMed Central: 4718047


Affiliations:


Links toward previous steps (curation, corpus...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A multimodal dataset of spontaneous speech and movement production on object affordances</title>
<author>
<name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,
<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,
<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Institute for Language and Speech Processing (ILSP), ‘Athena’ Research Center</institution>
, 15125 Athens,
<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26784391</idno>
<idno type="pmc">4718047</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4718047</idno>
<idno type="RBID">PMC:4718047</idno>
<idno type="doi">10.1038/sdata.2015.78</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000611</idno>
<idno type="wicri:Area/Pmc/Curation">000611</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000177</idno>
<idno type="wicri:source">PubMed</idno>
<idno type="wicri:Area/PubMed/Corpus">000125</idno>
<idno type="wicri:Area/PubMed/Curation">000125</idno>
<idno type="wicri:Area/PubMed/Checkpoint">000158</idno>
<idno type="wicri:Area/Ncbi/Merge">003F29</idno>
<idno type="wicri:Area/Ncbi/Curation">003F29</idno>
<idno type="wicri:Area/Ncbi/Checkpoint">003F29</idno>
<idno type="wicri:Area/Main/Merge">000313</idno>
<idno type="wicri:Area/Main/Curation">000313</idno>
<idno type="wicri:Area/Main/Exploration">000313</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A multimodal dataset of spontaneous speech and movement production on object affordances</title>
<author>
<name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,
<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Cognitive Systems Research Institute (CSRI)</institution>
, 11525 Athens,
<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>Institute for Language and Speech Processing (ILSP), ‘Athena’ Research Center</institution>
, 15125 Athens,
<country>Greece</country>
</nlm:aff>
<country xml:lang="fr">Grèce</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Data</title>
<idno type="eISSN">2052-4463</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To-date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of ‘thinking aloud’, spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Pastra, K" uniqKey="Pastra K">K. Pastra</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<affiliations>
<list>
<country>
<li>Grèce</li>
</country>
</list>
<tree>
<country name="Grèce">
<noRegion>
<name sortKey="Vatakis, Argiro" sort="Vatakis, Argiro" uniqKey="Vatakis A" first="Argiro" last="Vatakis">Argiro Vatakis</name>
</noRegion>
<name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
<name sortKey="Pastra, Katerina" sort="Pastra, Katerina" uniqKey="Pastra K" first="Katerina" last="Pastra">Katerina Pastra</name>
</country>
</tree>
</affiliations>
</record>
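
The identifiers and author names embedded in this record can be extracted programmatically. Below is a minimal Python sketch; it is not part of the Wicri/Dilib toolchain, and the file name record.xml is hypothetical. Because the wicri: and nlm: prefixes are declared upstream in the full export, the fragment is first wrapped with placeholder namespace URIs so that the standard-library parser accepts it.

import xml.etree.ElementTree as ET

# Bind the wicri: and nlm: prefixes to placeholder URIs so the fragment parses on its own.
NS_WRAPPER = ('<wrapper xmlns:wicri="urn:placeholder:wicri" '
              'xmlns:nlm="urn:placeholder:nlm">{}</wrapper>')

def extract_fields(record_xml):
    """Return title, identifiers and author names from one <record> fragment."""
    root = ET.fromstring(NS_WRAPPER.format(record_xml))
    title = root.findtext('.//titleStmt/title')
    # Collect every <idno type="..."> into a type -> value map (doi, pmid, pmc, ...).
    idnos = {idno.get('type'): idno.text for idno in root.iter('idno')}
    # Keep only <name> elements that carry first/last attributes (full author entries).
    authors = sorted({'{} {}'.format(n.get('first'), n.get('last'))
                      for n in root.iter('name') if n.get('first')})
    return {'title': title, 'doi': idnos.get('doi'), 'pmid': idnos.get('pmid'),
            'pmc': idnos.get('pmc'), 'authors': authors}

if __name__ == '__main__':
    with open('record.xml', encoding='utf-8') as fh:  # hypothetical dump of the record above
        print(extract_fields(fh.read()))

Applied to the record above, this yields the DOI 10.1038/sdata.2015.78, PubMed identifier 26784391, PMC identifier 4718047 and the two authors.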

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000313 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000313 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     PMC:4718047
   |texte=   A multimodal dataset of spontaneous speech and movement production on object affordances
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:26784391" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024